Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode

Authors

  • Laura Barca
  • Giovanni Pezzulo
  • Marianna Castrataro
  • Pasquale Rinaldi
  • Maria Cristina Caselli
Abstract

Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with little or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers, who responded faster to real words than to consonant strings, showing over-reliance on whole-word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can produce interference or facilitation effects.


Similar Articles

Early use of phonological codes in deaf readers: An ERP study.

Previous studies suggest that deaf readers use phonological information of words when it is explicitly demanded by the task itself. However, whether phonological encoding is automatic remains controversial. The present experiment examined whether adult congenitally deaf readers show evidence of automatic use of phonological information during visual word recognition. In an ERP masked priming le...


Using talking lights illumination-based communication networks to enhance word comprehension by people who are deaf or hard of hearing.

This article details a new method that has been developed to transmit auditory and visual information to people who are deaf or hard of hearing. In this method, ordinary fluorescent lighting is modulated to carry an assistive data signal throughout a room while causing no flicker or other distracting visual problems. In limited trials with participants who are deaf or hard of hearing, this assi...


Nonverbal Semantic Processing Disrupts Visual Word Recognition in Healthy Adults

Two experiments examined the effect of semantic interference on visual lexical decision (vLD) in normal skilled readers. Experiment 1 employed a dual-task paradigm to test whether nonverbal semantic processing disrupts visual word recognition when the orthographic structure of words and non-words is controlled. Experiment 2 employed the same paradigm to test whether participants strategically s...


Early Event-related Potential Effects of Syllabic Processing during Visual Word Recognition

A number of behavioral studies have suggested that syllables might play an important role in visual word recognition in some languages. We report two event-related potential (ERP) experiments using a new paradigm showing that syllabic units modulate early ERP components. In Experiment 1, words and pseudowords were presented visually and colored so that there was a match or a mismatch between th...


Cued Speech automatic recognition in normal-hearing and deaf subjects

This article discusses the automatic recognition of Cued Speech in French based on hidden Markov models (HMMs). Cued Speech is a visual mode which, by using hand shapes in different positions and in combination with lip patterns of speech, makes all the sounds of a spoken language clearly understandable to deaf people. The aim of Cued Speech is to overcome the problems of lipreading and thus en...



Journal:

Volume 8, Issue: -

Pages: -

Publication year: 2013